Real-Time Planning with Primitives for Dynamic Walking over Uneven Terrain
We present an algorithm for receding-horizon motion planning using a finite
family of motion primitives for underactuated dynamic walking over uneven
terrain. The motion primitives are defined as virtual holonomic constraints,
and the special structure of underactuated mechanical systems operating subject
to virtual constraints is used to construct closed-form solutions and a special
binary search tree that dramatically speed up motion planning. We propose a
greedy depth-first search and discuss improvements using energy-based
heuristics. The resulting algorithm can plan several footsteps ahead in a
fraction of a second for both the compass-gait walker and a planar
7-degree-of-freedom, five-link walker.
Comment: Conference submission
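The greedy depth-first search over a finite primitive family might be sketched as follows. This is an illustrative toy, not the paper's planner: the terrain model (a set of feasible footholds), the primitive set (step lengths with energy costs), and the function names are all assumptions.

```python
# Hypothetical sketch: greedy depth-first search over a finite family of
# motion primitives. Terrain is a set of feasible foothold positions;
# each primitive is a (step length, energy cost) pair.

def plan_footsteps(terrain, primitives, start, horizon):
    """Plan `horizon` footsteps ahead by depth-first search, greedily
    trying cheaper primitives first and backtracking on infeasibility."""
    def dfs(pos, steps_left, plan):
        if steps_left == 0:
            return plan
        # Greedy ordering: cheapest primitive first (a stand-in for the
        # paper's energy-based heuristics).
        for step, cost in sorted(primitives, key=lambda p: p[1]):
            nxt = pos + step
            if nxt in terrain:              # footstep must land on terrain
                result = dfs(nxt, steps_left - 1, plan + [step])
                if result is not None:
                    return result           # depth-first: commit to first success
        return None                         # dead end: backtrack
    return dfs(start, horizon, [])

terrain = {0, 1, 2, 4, 5, 7}                # footholds, with gaps at 3 and 6
primitives = [(1, 1.0), (2, 1.5), (3, 2.5)] # (step length, energy cost)
plan = plan_footsteps(terrain, primitives, 0, 4)
```

The search prefers short, cheap steps but is forced to select the longer primitive to clear the gap, illustrating how backtracking handles uneven terrain.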
Convex Identification of Stable Dynamical Systems
This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions) and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior-point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods, including nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation-maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher-fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered.
Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in the accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
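For intuition on why positive systems admit simpler analysis via linear Lyapunov functions, here is a small illustrative check. It is my own construction, not from the thesis: for a discrete-time positive system x[k+1] = A x[k] with A nonnegative elementwise, stability is certified by a linear function V(x) = pᵀx with p > 0 and Aᵀp < p elementwise, a linear condition rather than a semidefinite one.

```python
# Illustrative sketch (not from the thesis): linear Lyapunov certificate
# for a discrete-time positive linear system x[k+1] = A x[k], A >= 0.
import numpy as np

def linear_lyapunov_certificate(A):
    """For elementwise-nonnegative A with spectral radius < 1, return
    p > 0 satisfying A^T p < p elementwise; raise if no such p exists."""
    n = A.shape[0]
    assert np.all(A >= 0), "positive system requires A >= 0"
    # p = (I - A^T)^{-1} 1 gives A^T p = p - 1 < p whenever rho(A) < 1,
    # and p > 0 by nonnegativity of A (Neumann series argument).
    p = np.linalg.solve(np.eye(n) - A.T, np.ones(n))
    if not (np.all(p > 0) and np.all(A.T @ p < p)):
        raise ValueError("no linear certificate: spectral radius >= 1")
    return p

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
p = linear_lyapunov_certificate(A)
```

Because the certificate is a vector satisfying linear inequalities (rather than a matrix satisfying a linear matrix inequality), conditions of this form can be checked and optimized with linear programming, which is also what makes them amenable to distributed optimization.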
On the smoothness of nonlinear system identification
We shed new light on the smoothness of optimization problems arising
in prediction-error parameter estimation of linear and nonlinear systems. We
show that, for regions of the parameter space where the model is not
contractive, the Lipschitz constant and β-smoothness of the objective
function might blow up exponentially with the simulation length, making it hard
to numerically find minima within those regions, or even to escape from them.
In addition to providing theoretical understanding of this problem, this paper
also proposes the use of multiple shooting as a viable solution. The proposed
method minimizes the error between a prediction model and the observed values.
Rather than running the prediction model over the entire dataset, multiple
shooting splits the data into smaller subsets and runs the prediction model
over each subset, making the simulation length a design parameter and making it
possible to solve problems that would be infeasible using a standard approach.
The equivalence to the original problem is obtained by including constraints in
the optimization. The new method is illustrated by estimating the parameters of
nonlinear systems with chaotic or unstable behavior, as well as neural
networks. We also present a comparative analysis of the proposed method with
multi-step-ahead prediction error minimization.
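The multiple-shooting construction described above can be sketched on a toy model. The scalar logistic-map model, the segment length, and the function names below are illustrative assumptions; the point is only the structure: short per-segment simulations, extra decision variables for segment initial states, and equality (continuity) constraints tying segments together.

```python
# Minimal multiple-shooting sketch (hypothetical model): instead of one
# long simulation over the whole dataset, split the data into short
# segments, simulate each from its own decision-variable initial state,
# and add continuity defects as equality constraints.
import numpy as np

def simulate(theta, x0, n):
    """Toy one-parameter model: x[k+1] = theta * x[k] * (1 - x[k])."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = theta * x[k] * (1 - x[k])
    return x

def multiple_shooting_residuals(theta, x0s, y, seg_len):
    """Stacked per-segment simulation errors and continuity defects.
    x0s holds one initial state per segment (extra decision variables)."""
    errors, defects = [], []
    for i, x0 in enumerate(x0s):
        seg = simulate(theta, x0, seg_len)          # short simulation only
        errors.append(seg - y[i * seg_len:(i + 1) * seg_len])
        if i + 1 < len(x0s):
            # Continuity constraint: the model's one-step prediction from
            # the end of segment i must equal the next segment's start.
            nxt = theta * seg[-1] * (1 - seg[-1])
            defects.append(nxt - x0s[i + 1])
    return np.concatenate(errors), np.array(defects)

theta_true = 3.8                       # chaotic regime of the logistic map
y = simulate(theta_true, 0.2, 12)      # "observed" data
x0s = y[::4].copy()                    # one initial state per length-4 segment
errors, defects = multiple_shooting_residuals(theta_true, x0s, y, 4)
```

At the true parameters and consistent segment initial states, both residual vectors vanish; an optimizer would minimize the errors subject to the defects being zero, with the simulation length per segment now a design parameter.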
Shortest Paths in Graphs of Convex Sets
Given a graph, the shortest-path problem requires finding a sequence of edges
with minimum cumulative length that connects a source to a target vertex. We
consider a generalization of this classical problem in which the position of
each vertex in the graph is a continuous decision variable, constrained to lie
in a corresponding convex set. The length of an edge is then defined as a
convex function of the positions of the vertices it connects. Problems of this
form arise naturally in road networks, robot navigation, and even optimal
control of hybrid dynamical systems. The price for such a wide applicability is
the complexity of this problem, which is easily seen to be NP-hard. Our main
contribution is a strong mixed-integer convex formulation based on perspective
functions. This formulation has a very tight convex relaxation and allows
globally optimal paths to be found efficiently in large graphs and in
high-dimensional spaces.
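A toy instance of this generalized problem can be solved by brute force: enumerate the s-t paths, then solve the convex program over vertex positions for each path. The two-corridor graph, the box-shaped convex sets, and the helper names below are illustrative assumptions; the paper's contribution is precisely a mixed-integer convex formulation that avoids this path enumeration.

```python
# Toy shortest-path problem over convex sets: vertex positions live in
# axis-aligned boxes in the plane, and an edge's length is the Euclidean
# distance between the positions of the vertices it connects.
import numpy as np
from scipy.optimize import minimize

boxes = {                      # per-vertex box: (lower corner, upper corner)
    "s": ([0, 0], [0, 0]),     # source: a fixed point
    "A": ([1, 0], [2, 1]),     # low corridor
    "B": ([1, 2], [2, 3]),     # high corridor
    "t": ([4, 0], [4, 0]),     # target: a fixed point
}
paths = [["s", "A", "t"], ["s", "B", "t"]]   # the two s-t paths in this graph

def path_length(path):
    """Minimize total Euclidean edge length over the free vertex positions
    (a convex program for a fixed path)."""
    fixed = {v: np.array(boxes[v][0], float) for v in path
             if boxes[v][0] == boxes[v][1]}
    free = [v for v in path if v not in fixed]
    def cost(z):
        pos = dict(fixed)
        for i, v in enumerate(free):
            pos[v] = z[2 * i:2 * i + 2]
        return sum(np.linalg.norm(pos[a] - pos[b])
                   for a, b in zip(path, path[1:]))
    bounds = [b for v in free for b in zip(boxes[v][0], boxes[v][1])]
    x0 = np.array([(l + h) / 2 for v in free
                   for l, h in zip(boxes[v][0], boxes[v][1])])
    return minimize(cost, x0, bounds=bounds).fun

best = min(paths, key=path_length)
```

Here the low corridor contains the straight segment from source to target, so the optimal path routes through it with total length 4; enumeration is exponential in general, which is why a tight mixed-integer convex formulation matters.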
Smooth Model Predictive Control with Applications to Statistical Learning
Statistical learning theory and high-dimensional statistics have had a
tremendous impact on machine learning theory and on a variety of other
domains, including systems and control theory. Over the past few years we have
witnessed a variety of applications of such theoretical tools to help answer
questions such as: how many state-action pairs are needed to learn a static
control policy to a given accuracy? Recent results have shown that continuously
differentiable and stabilizing control policies can be well-approximated using
neural networks with hard guarantees on performance, yet often even the
simplest constrained control problems are not smooth. To address this gap, in
this paper we study smooth approximations of linear Model Predictive Control
(MPC) policies, in which hard constraints are replaced by barrier functions,
a.k.a. barrier MPC. In particular, we show that barrier MPC inherits the
exponential stability properties of the original non-smooth MPC policy. Using a
careful analysis of the proposed barrier MPC, we show that its smoothness
constant can be carefully controlled, thereby paving the way for new sample
complexity results for approximating MPC policies from sampled state-action
pairs.
Comment: 15 pages, 1 figure
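The barrier idea can be illustrated on a scalar toy problem. The one-step horizon, the system, and the weights below are my assumptions, not the paper's setup: the hard input constraint |u| ≤ u_max of a one-step linear MPC is replaced by a log-barrier, yielding a smooth policy that stays strictly inside the constraint and closely tracks the original piecewise-linear (non-smooth) clamped policy.

```python
# Scalar sketch of barrier MPC: one-step MPC for x[k+1] = a*x + b*u with
# quadratic cost, hard constraint |u| <= u_max vs. a log-barrier version.
import numpy as np
from scipy.optimize import minimize_scalar

a, b, q, r, u_max = 1.2, 1.0, 1.0, 0.1, 1.0

def mpc_hard(x):
    """Exact one-step MPC: the unconstrained optimum clamped to the box
    (piecewise linear in x, hence non-smooth at the clamping points)."""
    u_star = -q * a * b * x / (q * b**2 + r)
    return np.clip(u_star, -u_max, u_max)

def mpc_barrier(x, eps=1e-2):
    """Barrier MPC: the hard constraint is replaced by -eps * log terms,
    giving a smooth policy that approaches mpc_hard as eps -> 0."""
    def cost(u):
        return (q * (a * x + b * u)**2 + r * u**2
                - eps * (np.log(u_max - u) + np.log(u_max + u)))
    res = minimize_scalar(cost, bounds=(-u_max + 1e-9, u_max - 1e-9),
                          method="bounded")
    return res.x

xs = np.linspace(-2, 2, 9)
hard = np.array([mpc_hard(x) for x in xs])
smooth = np.array([mpc_barrier(x) for x in xs])
```

Shrinking eps trades approximation error at the constraint boundary against the smoothness constant of the policy, which is the quantity the paper shows can be carefully controlled.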